In the last lesson, we looked at the Compute Engine service, which leaves a lot of operations work for users to handle. We also looked at instance groups and how to configure autoscaling. This lesson is all about Kubernetes and Google Kubernetes Engine.

Google Kubernetes Engine (GKE for short) goes one step further in minimizing the operations workload for users. GKE is suitable for containerized workloads. To understand GKE in detail, let’s first learn a little about Kubernetes.

Introduction to Kubernetes#

One lesson is not enough to explain Kubernetes in detail. However, this lesson gives you a high-level overview of Kubernetes.

Kubernetes is a container orchestration (management) tool. Created by Google, it was originally used to manage Google’s own infrastructure. A Kubernetes deployment is, in effect, a giant server made up of many small servers, and is called a Kubernetes cluster.

A Kubernetes cluster of 4 nodes: each node is an n1-standard VM (1 vCPU, 3.75 GB RAM) running pods, each pod has its own IP and contains one or more containers.

Terms#

Let’s understand the common terms of Kubernetes.

  • Pod: A pod is the smallest unit of deployment in Kubernetes. A pod can have multiple containers running inside it; these containers run and scale together and share the pod’s IP. Essentially, a pod is one running copy of our application.

  • Node: A node is an actual VM instance with Docker and the Kubernetes components preinstalled. Pods run inside nodes. Depending on the configuration, a node can run a single pod or multiple pods at a time.

  • Service: A Service is the endpoint for users: the interface between a pod and the outside world. Because a pod’s IP can change when the pod is restarted, a Service lets us map our deployment to a stable endpoint.

  • Deployment: A deployment is a definition of a pod along with the number of replicas to run. A pod contains containers, and containers run code; code can fail, and that leads to pod failure. It is the deployment’s responsibility to keep the specified number of pods up and running in the cluster.

  • DaemonSet: This type of controller is used for node-level configuration. A DaemonSet makes sure that a copy of a specific piece of software runs on each node, for example, monitoring agents such as New Relic.

  • Secret: A Secret is a mechanism for storing sensitive data. We can pass the sensitive data to a container at runtime via an environment variable or by mounting the data as a volume.

  • ConfigMap: Similar to a Secret, except it stores the non-sensitive configuration of the code. It can also be accessed by a container at runtime.
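To make these terms concrete, here is a minimal pod manifest. This is only an illustrative sketch; the names and image are hypothetical.

```yaml
# A minimal pod with one container. In practice you rarely create
# bare pods; a Deployment creates and manages them for you.
apiVersion: v1
kind: Pod
metadata:
  name: my-app          # hypothetical name
  labels:
    app: my-app         # labels let Services and selectors find this pod
spec:
  containers:
  - name: web
    image: nginx:1.21   # any container image
    ports:
    - containerPort: 80
```

All containers listed under `spec.containers` run together inside this one pod and share its IP.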

Working#

The high-level working of Kubernetes can be explained in a few steps.

  1. We create a deployment.yaml file that includes the pod definition and the number of replicas. For example, say we need 4 pods running at any point in time; specifying replicas: 4 takes care of that.

  2. Kubernetes reads the deployment file and creates pods based on node availability.

  3. At any point, if a pod fails, Kubernetes creates a new pod and removes the failed one.

  4. Once the pods are up and running, they are exposed using a Service. Because a pod’s IP is ephemeral, a Service maps a permanent address to the pods so that they can be accessed irrespective of their current IP addresses.
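The deployment.yaml from step 1 might look like this sketch (the name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4                 # keep 4 pods running at all times
  selector:
    matchLabels:
      app: my-app             # must match the pod template labels below
  template:                   # the pod definition
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
```

Applying this file with `kubectl apply -f deployment.yaml` creates the pods; if you delete one, the deployment immediately replaces it to keep the count at 4.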

This is Kubernetes in short. Kubernetes has many more components, and that is what most Kubernetes courses cover. Let’s look at the Kubernetes architecture, and then you will see what Kubernetes Engine is.

Kubernetes Components

We will not go through each component of K8s in this lesson; if you want, you can read about them here. But these components need to be configured to use Kubernetes, and that’s extra overhead. Wouldn’t it be great to use Kubernetes’ main features and outsource the operations and configuration part? That is where Kubernetes Engine comes to the rescue.

Google Kubernetes Engine#

GKE is Google’s managed way of running a Kubernetes cluster. GKE takes care of managing the Kubernetes (K8s) cluster for you. So, let’s see how we can create a cluster with a minimal amount of input via the console form and the CLI.

Creating a GKE cluster#

  1. Go to the Main menu > Kubernetes Engine > Clusters.

  2. The Kubernetes Engine API needs to be enabled to create a GKE cluster. If the API is not enabled automatically by Google Cloud, the page will redirect you to the GKE API page. Enable the API.

Enable GKE API
  3. Click on the Create button. Choose Standard cluster by clicking on the Configure button.

Fields explanation

There are 3 sections to fill out to create a cluster.

  1. Cluster basics
  2. Node pools
  3. Cluster
Three sections to create a standard GKE cluster.

Cluster basics

This includes the cluster name, location, and the GKE master node version. The name can be a hyphen-separated, lowercase alphanumeric value.

The location has 2 choices: regional and zonal. As usual, regional offers more redundancy and availability; however, that increases the cost as well.

The GKE version can be static, or we can select a release channel to automatically keep the cluster on a recent GKE version. The release channels are:

  • Rapid: Get the latest Kubernetes release as early as possible and use new GKE features the moment they become available.

  • Regular: 2-3 months after the release lands in Rapid.

  • Stable: 2-3 months after the release lands in Regular.

Node Pool

A node pool is the configuration of the nodes in the cluster. It covers the pool itself, the nodes, node security, and metadata.

For the pool configuration, we can provide the node pool name, the number of instances in the pool, and flags such as enabling autoscaling and specifying node locations.

  • Nodes: Under the node config, we can select the machine type and OS image for the nodes. This is the same configuration we use when creating a Compute Engine instance, except for the maximum-pods-per-node field at the end.

  • Security: In the security section, we can select which service account the nodes should use, along with the access scopes for other services’ APIs. Let’s keep all the values at their defaults for now.

  • Metadata: This section is used to define labels and metadata for the cluster. These labels are applied to every Kubernetes node in this node pool and can be used in node selectors to control how workloads are scheduled to your nodes. Let’s skip this section for now.

Cluster

The Cluster section provides cluster networking configuration for advanced setups, such as a private cluster within a VPC. These settings are out of scope for this course, so keep them at their defaults for now. You can read about each of them, but it is not required to clear the exam.

Once done, click the “Create” button and wait 2-3 minutes.

Keep everything default and click on the CREATE button.

Deploying workloads#

Now that the cluster is ready, it is time to deploy a containerized workload (image) to it. We will deploy the Nginx web server to the cluster.

Steps to deploy the workload are:

  • Go to the Workloads tab.
  • Click on the “Deploy” button.
  • Fill in the container details.
  • Fill in the configuration details.
  • Click the “Deploy” button.

There are already some workloads running on the cluster. Click “Show system workloads” to see the GKE-specific workloads, such as Deployments, DaemonSets, and pods, running on the cluster.

Click on the Workloads tab and click the Deploy button. The form is pre-filled with a default sample container ready for use. You can use other images from Google’s Container Registry or your own private container registry as well, but for this tutorial, let’s go with the default one.

Keep the default values and click the "CONTINUE" button.

A pod can have multiple containers, hence the option to add a container. We don’t need it for now, so hit Continue.

The next part is about configuration.

  • Application name: The name of the application. You can provide any alphanumeric name.

  • Namespace: A namespace logically separates pods within the cluster. You can use namespaces to organize your pods. Let’s go with the default namespace.

  • Labels: Labels can be used to organize workloads.

  • Cluster: Workloads can be deployed to different clusters. Currently, the cluster we created earlier is selected.

The namespace is important here. We can have different copies of the code running in different namespaces.

Consider a namespace a separate space for running different versions of the code. We can have multiple pods with identical names in the same cluster, as long as they are separated by namespace.
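As a sketch, a namespace can be created declaratively, and a pod is placed in it via its metadata (the names here are hypothetical):

```yaml
# Create a "staging" namespace...
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
# ...and place a pod inside it. A pod named "my-app" could exist
# again in another namespace without any conflict.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: staging
spec:
  containers:
  - name: web
    image: nginx:1.21
```

Omitting `metadata.namespace` places the pod in the `default` namespace, which is what the console form does here.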

Keep the default values for the configuration.

So, leave all the fields at their defaults and hit the “Deploy” button. This will create the pods on the nodes. You will get a details page as soon as the Nginx pods are created. Scroll down to the Managed Pods section to see them.

Keep the default configuration and click the "DEPLOY" button.

Creating Service#

Now that we have pods up and running, we need to expose this Nginx web server to the outside world. Currently, the Nginx server is not accessible from the internet, as there is no public IP to reach it.

Since multiple pods are running on the same node, we also need something that can distribute requests among all the pods and, if any pod is down, forward requests to the healthy ones. The solution to this problem is a Service.

A Service attaches an endpoint to the pods and load-balances requests amongst them.

To create a service,

  1. Go to the nginx-1 workload.
  2. Scroll down to the Exposing services section.
  3. Click the “Expose” button.

There are 3 types of services.

  1. ClusterIP: Exposes the service on a cluster-internal IP (the service is only reachable from within the cluster).

  2. NodePort: Exposes the service on each node’s IP. Also creates a ClusterIP service that the NodePort service will route to.

  3. LoadBalancer (default): Exposes the service externally using a load balancer. Also creates NodePort and ClusterIP services for the load balancer to route to.

The LoadBalancer service always forwards traffic to the NodePort on each node in the cluster, and the NodePort communicates with the ClusterIP.
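Expressed as a manifest, the Service the console creates for us is roughly equivalent to this sketch. The names are illustrative, and the selector assumes the nginx-1 deployment’s pods carry the label app: nginx-1:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-1-service
spec:
  type: LoadBalancer      # also provisions NodePort and ClusterIP underneath
  selector:
    app: nginx-1          # route traffic to pods carrying this label
  ports:
  - port: 80              # port exposed on the load balancer
    targetPort: 80        # port the Nginx container listens on
```

Setting `type: NodePort` or `type: ClusterIP` instead would expose the pods only on each node’s IP or only inside the cluster, respectively.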

To understand the concept of these service types let’s look at one simple diagram.


Service types explanation: the load balancer (public IP, e.g., 146.148.111.225) forwards traffic to a NodePort (e.g., 30296) on each node, which routes it to the ClusterIP (e.g., 10.8.4.153) and on to the Nginx pods.

The external world can reach the Nginx pods using the public IP assigned to the load balancer. The load balancer reaches the nodes via the NodePort, and the NodePort forwards traffic to the ClusterIP. This layered approach provides different types of access to the workloads.

So, let’s select the LoadBalancer service so that the Nginx containers can be reached from the internet.

Types of services. Select the "Load balancer" service.

The “LoadBalancer” service is the default and is used for exposing a workload to the internet.

Leave all the fields at their defaults and click “Create”. This creates a service with an external IP address that we can use to reach the Nginx web server. Open the IP address in a browser to see the Nginx homepage.

External IP

Configmaps and secrets#

Next, we have the Configuration tab. It is always recommended to separate configuration from application code, and Kubernetes follows the same recommendation: we can save our configs in ConfigMaps and our sensitive data in Secrets, both created using the command line.

  • ConfigMaps are used to store the non-sensitive configuration of the application, for example, environment flags, versions, and so on.

  • Secrets are used to store the sensitive configuration of the application, such as DB credentials.
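As a sketch, a ConfigMap and a Secret look like this; the names and values are hypothetical:

```yaml
# Non-sensitive settings live in a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  APP_VERSION: "1.0"
---
# Sensitive values live in a Secret; Kubernetes stores them
# base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
stringData:
  DB_PASSWORD: "s3cret"
```

A container can then pull these in at runtime as environment variables (via `configMapKeyRef` / `secretKeyRef` entries in its `env` list) or as files in a mounted volume.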

You cannot create ConfigMaps and Secrets from the UI as of now; you can create them using the kubectl CLI only. This is out of the scope of this course, but if you want, you can read more about them here:

Complete the next lab to get more ideas about how GKE works.

Lab#

The following Google Kubernetes Engine lab introduces you to deploying a full-fledged application on GKE. Complete the lab below.

Deploy, scale, and update your website with Google Kubernetes Engine (GKE)

Well, this was a short review of Kubernetes Engine. Google Kubernetes Engine has a lot more to offer, but for this course, understanding what types of problems GKE solves is more than enough.

Cleanup#

Delete the created clusters to avoid extra charges. Use the `gcloud container clusters delete [cluster-name] --zone [zone]` command to delete the clusters.

